Eco-driving has garnered considerable research attention owing to its potential socio-economic impact, including enhanced public health and mitigated climate change through reduced greenhouse gas emissions. With more autonomous vehicles (AVs) expected on the road, devising an eco-driving strategy for hybrid traffic networks encompassing AVs and human-driven vehicles (HDVs), coordinated with traffic lights, is a challenging task. The challenge stems partly from insufficient infrastructure for collecting, transmitting, and sharing real-time traffic data among vehicles, facilities, and traffic control centers, and for the subsequent decision-making of the agents involved in traffic control. Additionally, the intricate nature of existing traffic networks, with their diverse array of vehicles and facilities, hinders the development of a mathematical model that accurately characterizes the traffic network. In this study, we used the Simulation of Urban MObility (SUMO) simulator to tackle the first challenge through computational analysis. To address the second challenge, we employed a model-free reinforcement learning (RL) algorithm, proximal policy optimization (PPO), to decide the actions of AVs and traffic light signals in a traffic network. A novel eco-driving strategy was proposed that introduces different percentages of AVs into the traffic flow and coordinates them with traffic light signals via RL to control the overall speed of the vehicles, improving fuel consumption efficiency. Average rewards at different AV penetration rates (5%, 10%, and 20% of total vehicles) were compared with the case of no AVs in the traffic flow (0% penetration rate). The 10% penetration rate converged to its average reward in the least time, leading to a significant reduction in fuel consumption and total delay across all vehicles.
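The penetration-rate comparison in the abstract can be sketched with a toy reward model. Everything below (the reward weights, the assumed fuel and delay discounts for AVs, and the `simulate_episode` stand-in for a SUMO episode) is a hypothetical illustration, not the paper's actual PPO setup or reward function:

```python
import random

def eco_reward(fuel_used, delay, w_fuel=1.0, w_delay=0.5):
    """Hypothetical per-vehicle reward: penalize fuel use and delay.

    The weights are illustrative; the paper's exact reward is not
    reproduced here.
    """
    return -(w_fuel * fuel_used + w_delay * delay)

def simulate_episode(av_penetration, n_vehicles=100, seed=0):
    """Toy stand-in for one SUMO episode.

    Assumption (for illustration only): RL-controlled AVs smooth traffic,
    so vehicles flagged as AVs incur lower fuel burn and delay.
    Returns the average reward over all vehicles.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_vehicles):
        is_av = rng.random() < av_penetration
        fuel = rng.uniform(0.8, 1.2) * (0.85 if is_av else 1.0)
        delay = rng.uniform(0.0, 2.0) * (0.7 if is_av else 1.0)
        total += eco_reward(fuel, delay)
    return total / n_vehicles

# Compare average reward across penetration rates, as in the study's setup.
for rate in (0.0, 0.05, 0.10, 0.20):
    print(f"penetration {rate:.0%}: avg reward {simulate_episode(rate):.3f}")
```

In this sketch, higher penetration mechanically raises the average reward because AVs are assumed strictly more efficient; the study's more interesting finding, that 10% penetration converged fastest, depends on training dynamics a toy model cannot capture.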
As a next-generation battery substitute for IoT systems, energy harvesting (EH) technology is transforming the IoT industry with environmental friendliness, ubiquitous accessibility, and sustainability, enabling a variety of self-sustaining IoT applications. However, owing to the weak and intermittent nature of EH power, the performance of EH-powered IoT systems and their collaborative routing mechanisms can deteriorate severely, causing data packet loss during each power failure. This phenomenon renders conventional routing policies and energy allocation strategies impractical. Given the complexity of the problem, reinforcement learning (RL) appears to be one of the most promising and applicable methods for addressing this challenge. Nevertheless, even when energy allocation and routing policy are jointly optimized by RL, the energy restrictions of EH devices mean that an inappropriately configured multi-hop network topology severely degrades data collection performance. Therefore, this article first conducts a thorough mathematical analysis and develops a topology design and validation algorithm for energy harvesting scenarios. It then develops DeepIoTRouting, a distributed and scalable deep reinforcement learning (DRL)-based approach that jointly addresses routing and energy allocation for energy-harvesting-powered distributed IoT systems. Experimental results show that, with topology optimization, DeepIoTRouting improves the amount of data delivered to the sink by at least 38.71% in a 20-device IoT network, significantly outperforming state-of-the-art methods.
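The topology validation idea can be illustrated with a simplified feasibility check: a multi-hop topology is only viable if each node's harvested energy budget covers forwarding its own traffic plus that of its downstream nodes. The function name, the BFS-tree routing assumption, and the per-round energy accounting are all assumptions for illustration, not the article's actual algorithm:

```python
from collections import deque

def feasible_topology(adj, sink, harvest, tx_cost, loads):
    """Check a multi-hop EH topology for energy feasibility (sketch).

    adj     : node -> list of neighbor nodes (undirected links)
    sink    : the data sink node
    harvest : node -> harvested energy budget per round
    tx_cost : energy consumed per packet transmission
    loads   : node -> packets generated per round

    Assumption: every node forwards via its parent on a BFS tree rooted
    at the sink, so each node transmits its own packets plus all packets
    from its subtree.
    """
    # Build a BFS tree toward the sink; parent[u] is u's next hop.
    parent = {sink: None}
    queue = deque([sink])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, []):
            if v not in parent:
                parent[v] = u
                queue.append(v)

    # Accumulate subtree loads: reversed BFS order visits children
    # before their parents.
    forwarded = dict(loads)
    for u in reversed(list(parent)):
        p = parent[u]
        if p is not None:
            forwarded[p] = forwarded.get(p, 0) + forwarded.get(u, 0)

    # Every non-sink node must afford all of its transmissions.
    return all(harvest[u] >= tx_cost * forwarded.get(u, 0)
               for u in parent if u != sink)
```

For example, in a chain `b -> a -> sink` where each node generates one packet per round, node `a` must transmit two packets, so it needs at least twice the per-packet energy; the actual article couples this kind of constraint with DRL-driven routing and energy allocation rather than a fixed BFS tree.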